On Derivation of MLP Backpropagation from the Kelley-Bryson Optimal-Control Gradient Formula and Its Application
Authors
Abstract
The well-known backpropagation (BP) derivative computation process for multilayer perceptron (MLP) learning can be viewed as a simplified version of the Kelley-Bryson gradient formula in classical discrete-time optimal control theory [1]. We detail the derivation in the spirit of dynamic programming, showing how the resulting formulas can serve to implement more elaborate learning whereby teacher signals can be presented to any node in any hidden layer, as well as at the terminal output layer. We illustrate such an elaborate training scheme using a small-scale industrial problem as a concrete example, in which some hidden nodes are taught to produce specified target values. In this context, part of the hidden layer is no longer “hidden.”
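For illustration only, the following minimal NumPy sketch shows one BP step in which a subset of hidden nodes additionally receives teacher signals, alongside the usual output-layer targets. The network size, data, targets, and constants are assumptions made for the sketch; the paper's industrial example is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-8-1 sigmoid MLP; sizes and data are illustrative assumptions.
W1 = rng.normal(scale=0.5, size=(8, 2)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(1, 8)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=2)          # one training pattern
t_out = np.array([0.7])         # teacher signal at the output layer
taught = np.array([0, 3])       # hidden nodes that are also taught (no longer "hidden")
t_hid = np.array([0.2, 0.9])    # their target activations
lam = 0.5                       # weight of the hidden-layer error term
eta = 0.1                       # learning rate

# Forward pass (the "state equations" of the discrete-time control analogy).
a1 = sigmoid(W1 @ x + b1)
a2 = sigmoid(W2 @ a1 + b2)

# Backward pass: the costate (delta) recursion, as in the Kelley-Bryson formula.
delta2 = (a2 - t_out) * a2 * (1.0 - a2)
delta1 = (W2.T @ delta2) * a1 * (1.0 - a1)
# Extra term: teacher signals injected directly at the taught hidden nodes.
delta1[taught] += lam * (a1[taught] - t_hid) * a1[taught] * (1.0 - a1[taught])

# Gradient-descent update.
W2 -= eta * np.outer(delta2, a1); b2 -= eta * delta2
W1 -= eta * np.outer(delta1, x);  b1 -= eta * delta1
```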
Similar Resources
Overfitting and Neural Networks: Conjugate Gradient and Backpropagation
Methods for controlling the bias/variance tradeoff typically assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural networks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of weight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary...
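As a hedged illustration of the global controls mentioned above (network size, weight decay, and validation-based training time), the sketch below uses scikit-learn's MLPRegressor on synthetic data; the data, sizes, and settings are assumptions, not taken from the cited study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy synthetic target

model = MLPRegressor(
    hidden_layer_sizes=(20,),     # network size
    alpha=1e-3,                   # weight decay (L2 penalty)
    early_stopping=True,          # training time chosen by a validation split
    validation_fraction=0.2,
    max_iter=2000,
    random_state=0,
)
model.fit(X, y)
print("best validation score:", model.best_validation_score_)
```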
On-line Adaptive Learning Rate BP Algorithm for MLP and Application to an Identification Problem
An on-line algorithm that uses an adaptive learning rate is proposed. Its development is based on the analysis of the convergence of the conventional gradient descent method for threelayer BP neural networks. The effectiveness of the proposed algorithm applied to the identification and prediction of behavior of non-linear dynamic systems is demonstrated by simulation experiments.
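The exact on-line rule of that paper follows from its convergence analysis and is not reproduced here; the sketch below only illustrates the general idea of an adaptive learning rate with the classical "bold driver" heuristic on a toy quadratic loss.

```python
import numpy as np

target = np.array([1.0, -2.0])

def loss(w):
    return 0.5 * np.sum((w - target) ** 2)

def grad(w):
    return w - target

w = np.zeros(2)
eta = 0.1                           # initial learning rate
prev = loss(w)
for _ in range(100):
    step = -eta * grad(w)
    if loss(w + step) < prev:       # error decreased: accept the step, grow the rate
        w += step
        prev = loss(w)
        eta *= 1.05
    else:                           # error increased: reject the step, shrink the rate
        eta *= 0.5
print(w, eta)
```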
Optimal integrated passive/active design of the suspension system using iteration on the Lyapunov equations
In this paper, an iterative technique is proposed to solve linear integrated active/passive design problems. The optimality of active and passive parts leads to the nonlinear algebraic Riccati equation due to the active parameters and some associated additional Lyapunov equations due to the passive parameters. Rather than the solution of the nonlinear algebraic Riccati equation, it is proposed ...
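The iterative design scheme itself is not reproduced here; the sketch below merely shows, on an arbitrary two-state system with made-up matrices, how the two equation types referred to above (an algebraic Riccati equation and a Lyapunov equation) can be solved with SciPy.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical two-state dynamics; all matrices are illustrative assumptions.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Algebraic Riccati equation: A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal active (LQR) gain

# Lyapunov equation for the closed loop: (A - BK)'X + X(A - BK) = -Q
Acl = A - B @ K
X = solve_continuous_lyapunov(Acl.T, -Q)
print(P, K, X, sep="\n")
```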
Numerical Solution of Optimal Heating of Temperature Field in Uncertain Environment Modelled by the use of Boundary Control
In the present paper, optimal heating of a temperature field, modelled as a boundary optimal control problem, is investigated in uncertain environments and then solved numerically. In the physical modelling, a partial differential equation with stochastic input and stochastic parameters is applied as the constraint of the optimal control problem. Controls are implemented ...
A Study on Neural Network Training Algorithm for Multiface Detection in Static Images
This paper reports study results on neural network training algorithms based on numerical optimization techniques for multiface detection in static images. The training algorithms involved are scaled conjugate gradient backpropagation, conjugate gradient backpropagation with Polak-Ribière updates, conjugate gradient backpropagation with Fletcher-Reeves updates, one-step secant backpropagation and resilient ba...
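As a flavour of one of the listed algorithms, the sketch below implements a minimal resilient-backpropagation-style (Rprop-like) update on a toy quadratic loss; the face-detection networks and image data of the cited study are not reproduced, and all constants are assumptions.

```python
import numpy as np

target = np.array([3.0, -1.0, 0.5])

def grad(w):                        # gradient of 0.5 * ||w - target||^2
    return w - target

w = np.zeros(3)
step = np.full(3, 0.1)              # per-weight step sizes
g_prev = np.zeros(3)
for _ in range(100):
    g = grad(w)
    same_sign = np.sign(g) * np.sign(g_prev)
    step = np.where(same_sign > 0, np.minimum(step * 1.2, 50.0), step)  # grow step
    step = np.where(same_sign < 0, np.maximum(step * 0.5, 1e-6), step)  # shrink step
    w -= np.sign(g) * step          # update uses only the sign of the gradient
    g_prev = g
print(w)
```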